
    Sign Language Recognition

    This chapter covers the key aspects of sign language recognition (SLR), starting with a brief introduction to the motivations and requirements, followed by a précis of sign linguistics and its impact on the field. The types of data available and their relative merits are explored, allowing examination of the features which can be extracted. Classifying the manual aspects of sign (similar to gestures) is then discussed from tracking and non-tracking viewpoints, before summarising some of the approaches to the non-manual aspects of sign languages. Methods for combining the sign classification results into full SLR are given, showing the progression towards speech recognition techniques and the further adaptations required for the sign-specific case. Finally, the current frontiers are discussed and recent research is presented. This covers the task of continuous sign recognition, the work towards true signer independence, how to effectively combine the different modalities of sign, making use of current linguistic research, and adapting to larger, noisier data sets.

    Discriminating features learning in hand gesture classification

    The advent and popularity of Kinect provides a new choice and opportunity for hand gesture recognition (HGR) research. In this study, the authors propose a discriminating feature extraction method for HGR, in which features from red, green and blue (RGB) images and depth images are both explored. More specifically, histogram of oriented gradients features, local binary pattern features, structure features and three-dimensional voxel features are first extracted from the RGB and depth images; these features are then reduced with a novel deflation orthogonal discriminant analysis, which enhances the discriminative ability of the features through supervised subspace projection. Extensive experimental results show that the proposed method improves HGR performance significantly.
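    The core idea of the reduction step above — project concatenated RGB and depth feature vectors into a low-dimensional subspace chosen with class labels — can be illustrated with classic Fisher discriminant analysis. The sketch below is not the paper's deflation orthogonal variant; it is a minimal stand-in showing what a supervised subspace projection does, with synthetic feature vectors in place of real HOG/LBP/voxel descriptors.

    ```python
    import numpy as np

    def fisher_lda(X, y, n_components):
        """Supervised subspace projection (classic Fisher LDA), a simplified
        stand-in for the paper's deflation orthogonal discriminant analysis.
        X: (n_samples, n_features) rows of concatenated descriptors; y: labels."""
        classes = np.unique(y)
        mean_all = X.mean(axis=0)
        d = X.shape[1]
        Sw = np.zeros((d, d))  # within-class scatter
        Sb = np.zeros((d, d))  # between-class scatter
        for c in classes:
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)
            diff = (mc - mean_all).reshape(-1, 1)
            Sb += len(Xc) * (diff @ diff.T)
        # Solve Sw^{-1} Sb v = lambda v (small ridge keeps Sw invertible),
        # then keep the directions with the largest discriminant ratio.
        evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
        order = np.argsort(evals.real)[::-1]
        return evecs.real[:, order[:n_components]]

    # Toy example: 6-dimensional "RGB + depth" descriptors for two gesture classes.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (20, 6)),   # class 0
                   rng.normal(3.0, 1.0, (20, 6))])  # class 1
    y = np.array([0] * 20 + [1] * 20)
    W = fisher_lda(X, y, n_components=1)
    Z = X @ W  # reduced, more class-separable features
    ```

    In the paper's pipeline the projected features `Z` would feed the gesture classifier; the deflation orthogonal scheme additionally enforces orthogonality between successive projection directions, which plain eigendecomposition does not guarantee.
    
    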